Universal Source Codes
Lecturer: Himanshu Tyagi
Scribe: Sandip Sinha
Recall the source codes we have seen so far:

• Huffman code (optimal prefix-free code)
• Shannon-Fano code
• Shannon-Fano-Elias code
• Arithmetic code (can handle a sequence of symbols)

In general, the first three codes do not achieve the optimal rate H(X), and there are no immediate extensions of these codes to rate-optimal codes for a sequence of symbols. Arithmetic coding, on the other hand, is rate-optimal. However, all of these schemes assume knowledge of the source distribution.

We will now study universal source codes, for which the distribution P of the source is unknown. We look at coding schemes for compression in several regimes:

1. Fixed-length codes attaining the optimal error exponent. We have already seen this as an application of the Method of Types; a sketch of the underlying two-part code appears after the notation below.
2. Variable-length codes which attain:
   • minimax optimality over a family of distributions;
   • minimax optimality for the worst-case individual sequence, when compared with a class of expert algorithms (for instance, all iid distributions over the alphabet).

For the rest of the lecture, Q^n will denote an element of P(X^n), i.e., a pmf on X^n; Θ will refer to a family of distributions indexed by the parameter θ; and x will denote an element of X^n, i.e., a sequence of length n.
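To make regime 1 concrete, here is a minimal Python sketch of a two-part code based on types. It is not from the notes: the function name and the restriction to a binary alphabet are illustrative assumptions. The code first describes the type of the sequence (its number of ones, one of n+1 values), then the index of the sequence within its type class. Neither part requires knowledge of P, yet the rate is h(k/n) + O(log n / n) bits per symbol, which approaches the entropy of any iid source.

import math
import random

def two_part_code_length(x):
    """Bits used by a two-part universal code for a binary sequence x:
    (i) describe the type, i.e. the number of ones k (one of n+1 values),
    (ii) describe the index of x within its type class of C(n, k) sequences.
    No knowledge of the source distribution is needed."""
    n, k = len(x), sum(x)
    type_bits = math.ceil(math.log2(n + 1))              # part (i): which type
    index_bits = math.ceil(math.log2(math.comb(n, k)))   # part (ii): index in class
    return type_bits + index_bits

# The per-symbol rate approaches the source entropy for any iid source:
random.seed(0)
n, p = 10_000, 0.2
x = [1 if random.random() < p else 0 for _ in range(n)]
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))     # binary entropy, about 0.722
print(two_part_code_length(x) / n, h)                    # the two values are close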
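For comparison with the known-distribution setting, the following idealized sketch (again not from the notes; it assumes an iid Bernoulli(p) source and ignores the finite-precision renormalization a practical coder needs) computes the interval that an arithmetic coder assigns to a binary sequence. Any point of the final interval, written to ceil(log2(1/width)) + 1 bits, identifies the sequence, so the rate is about -log2 P(x) / n ≈ H(X) bits per symbol, but only because p is known.

import math

def arithmetic_interval(x, p):
    """Idealized arithmetic coding of a binary sequence x under an iid
    Bernoulli(p) model: symbol 0 takes the left (1-p)-fraction of the
    current interval, symbol 1 the right p-fraction. The final interval
    has width P(x); floating point limits this toy to short sequences."""
    low, width = 0.0, 1.0
    for b in x:
        if b == 0:
            width *= 1 - p              # keep the left sub-interval
        else:
            low += width * (1 - p)      # skip the mass of symbol 0
            width *= p
    return low, width

low, width = arithmetic_interval([1, 0, 1, 1, 0], p=0.4)
print(math.ceil(math.log2(1 / width)) + 1)  # code length in bits for this sequence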